Meet SHRDLU (1968-1970)
Long before large language models like GPT, there was SHRDLU. Developed by Terry Winograd at MIT, it was a groundbreaking experiment in "Symbolic AI" (also known as GOFAI: Good Old-Fashioned AI).
1. The "Micro-World" Hypothesis
Instead of trying to understand the entire universe (which is messy), SHRDLU lived in a restricted Blocks World. This world contained only specific objects: bricks, pyramids, and a box.
- Symbol Grounding: Because the world was small, every word ("red", "move", "on") could be perfectly defined. The program "understood" a block because it had a direct pointer to that object in memory.
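The "direct pointer" idea can be illustrated with a minimal sketch. This is not SHRDLU's actual code (which was Lisp, not Python); the `Block` class, `WORLD` dictionary, and `referent` function are illustrative names chosen here:

```python
# Sketch of symbol grounding in a micro-world: each word resolves
# directly to an in-memory object, so "meaning" is a pointer lookup.

class Block:
    def __init__(self, name, color, shape):
        self.name = name
        self.color = color
        self.shape = shape

# The entire "universe": a handful of named objects.
WORLD = {
    "b1": Block("b1", "red", "brick"),
    "b2": Block("b2", "green", "brick"),
    "p1": Block("p1", "blue", "pyramid"),
}

def referent(color, shape):
    """Resolve a phrase like 'the red brick' to a concrete object."""
    matches = [b for b in WORLD.values()
               if b.color == color and b.shape == shape]
    return matches[0] if len(matches) == 1 else None

print(referent("red", "brick").name)  # -> b1
```

Because the vocabulary and the world are both tiny, every noun phrase either picks out exactly one object or fails cleanly; there is no open-ended ambiguity to resolve.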
2. Capabilities
- Understanding: "Put the red block on the green block."
- Reasoning: "I can't, the green block is covered." (It respected physical constraints such as support and clearance.)
- Memory: "Why did you do that?" -> "Because you asked me to."
3. Historical Context
It ran on a DEC PDP-10 mainframe (roughly 1 MIPS) and used a vector graphics display. The logic was written in MacLisp and Micro-Planner.
4. Python Representation (The "State")
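Below is one possible way to represent the Blocks World state in Python. SHRDLU's real state lived in Micro-Planner assertions, so the `world_state` dictionary and `supports` helper here are purely illustrative:

```python
# Illustrative snapshot of a Blocks World state: a catalog of objects
# plus an "on" relation recording what rests directly on what.
world_state = {
    "objects": {
        "b1":  {"shape": "brick",   "color": "red"},
        "b2":  {"shape": "brick",   "color": "green"},
        "p1":  {"shape": "pyramid", "color": "blue"},
        "box": {"shape": "box",     "color": "white"},
    },
    # "x on y": key rests directly on value.
    "on": {"b1": "table", "b2": "b1", "p1": "table"},
}

def supports(state, obj):
    """Return the objects resting directly on obj."""
    return [o for o, base in state["on"].items() if base == obj]

print(supports(world_state, "b1"))  # -> ['b2']
```

Every query the program can answer ("Is the red block clear?", "What is on the table?") reduces to a lookup or scan over this small, fully enumerated structure, which is exactly what the micro-world hypothesis buys you.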